Generative AI has stormed into the enterprise toolkit.

According to the IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, eight in ten organizations are already using it.

It’s become a social phenomenon, too. Many people have used GenAI tools like Copilot and ChatGPT, talked about their prompts over dinner, watched a video about the technology or experimented with similar tools. Most are at least becoming aware of “what it is.” Still, talk to anyone using it seriously and you’ll hear the same worry: Can we rely on it?

Bryan Harris, CTO of SAS, says there is a temptation to hand over trust just because a system feels “sophisticated.” Quantum AI, for example, has incredible potential but remains largely unproven; even so, nearly a quarter of organizations in the report said they already trust its outcomes.

Harris’ response, delivered on a panel of AI experts: not so fast. “We need to get out of this mindset of giving something trust just because it’s sophisticated. These systems have to earn it – the trust of our decision-making and the outcomes for businesses.”

That tension is more than a technical footnote; it’s a microcosm of a larger challenge: the trust dilemma.

The trust dilemma in plain terms

The report introduces three core metrics to understand AI adoption:

  • Trustworthy AI Index: Measures how organizations invest in responsible, reliable and ethical AI practices.
  • Impact Index: Quantifies tangible business value from AI, including productivity, innovation, customer experience and financial returns.
  • The Trust Dilemma: Highlights misalignment between perceived trust and actual system trustworthiness.

For now, let’s focus on the trust dilemma. Nearly half of organizations (46%) find themselves in this situation, either underutilizing reliable systems because confidence is too low or over-relying on unproven systems because confidence is too high.

By the numbers:

  • 78% of organizations say they have complete trust in AI
  • Only 40% demonstrate advanced or higher levels of AI trustworthiness

“Nearly half of AI potential is left untapped because trust doesn’t match reality.” – Data and AI Impact Report

It’s a simple idea with big implications: excitement and adoption alone won’t deliver results. Without governance, transparency and solid infrastructure, AI can’t reach its potential.

From generative to agentic

It’s tempting to try to fix the models – to push generative AI into domains where precision is nonnegotiable. Harris proposes a different approach: let generative AI shine where it excels and hand off the heavy lifting to systems that have already earned trust.

The future lies in using agentic AI to orchestrate the right tool for the right task:

  • Large language models make data approachable through natural language.
  • Trusted quantitative systems provide rigor and reliability.
  • Agentic workflows connect the two, delivering answers that are both usable and accurate.
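The division of labor above can be sketched in a few lines of code. This is an illustrative toy, not SAS's implementation: the functions, data and the stand-in for the generative layer are all hypothetical.

```python
# Hypothetical sketch of the orchestration pattern: a trusted quantitative
# system supplies the numbers, a generative-style layer supplies the language,
# and a small agentic function connects the two.
from statistics import mean

def on_time_rate(flights):
    """Trusted quantitative system: deterministic, repeatable, testable."""
    return mean(1.0 if f["on_time"] else 0.0 for f in flights)

def phrase(airline, rate):
    """Stand-in for the generative layer: turns numbers into plain language."""
    return f"{airline} arrived on time on {rate:.0%} of sampled flights."

def answer(airline, flights):
    """Agentic orchestration: right tool for the right task."""
    rate = on_time_rate(flights)   # rigor comes from the trusted system
    return phrase(airline, rate)   # accessibility comes from the language layer

flights = [{"on_time": True}, {"on_time": True},
           {"on_time": False}, {"on_time": True}]
print(answer("Airline A", flights))  # → Airline A arrived on time on 75% of sampled flights.
```

The point of the design is that the number itself is never generated; only the sentence around it is.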

The report underscores this: agentic AI progress will stall if cloud data environments are unoptimized, governance is weak or talent is lacking. Building a foundation of trust is a prerequisite for meaningful adoption.


The supply chain of a decision

Consider building a system to evaluate airlines purely on merit. Even with the best intentions, a large language model can introduce subtle biases: it might favor or disfavor an airline based on general narratives about it, rather than the airline’s actual performance.

Harris frames this challenge in terms of a “decision supply chain.” Every AI-driven outcome rests on a series of upstream choices:

  • Which data sets are authoritative?
  • Where are precision and repeatability required?
  • Where can human judgment or creativity guide outcomes?

He draws a parallel to early enterprise search. In the 2000s, employees were frustrated by a weak internal search while Google provided near-perfect answers at home. PageRank succeeded because it prioritized authoritative sources.

Today, generative AI brings that “wisdom of the crowds” experience to the enterprise, but internal data rarely matches the scale or rigor of public datasets. Without careful structuring, bias or error can slip in.

The solution is an agentic AI workflow. Deterministic or machine learning models handle the quantitative rigor. Generative AI translates insights into accessible language, interacting naturally with humans.

Rules, queries and human oversight are applied wherever precision is critical. Each stage of the decision supply chain is explicitly mapped, tested and validated.
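One way to make that mapping concrete is to represent each stage of the decision supply chain as a named validation step that data must pass before it flows downstream. The sketch below is hypothetical throughout: the stage names, source identifier and field names are invented for illustration.

```python
# Hypothetical sketch: a decision supply chain as explicit, testable stages.
def check_authoritative(record):
    """Upstream choice: admit only designated authoritative sources."""
    return record.get("source") in {"dot_on_time_db"}  # invented source name

def check_precision(record):
    """Precision gate: quantitative fields must be present and in range."""
    return 0.0 <= record.get("delay_minutes", -1.0) <= 1440.0

STAGES = [
    ("authoritative data", check_authoritative),
    ("precision required", check_precision),
]

def run_supply_chain(record):
    """Pass a record through each mapped stage; fail loudly, naming the stage."""
    for name, check in STAGES:
        if not check(record):
            return (False, f"failed stage: {name}")
    return (True, "record admitted to decision")
```

Because every stage is named and checked in code, a failed decision can be traced to the exact link in the chain that broke, which is what makes the outcome defensible.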

Addressing the trust dilemma

This approach directly tackles the trust dilemma highlighted in the report.

By anchoring AI outputs in verifiable data and structured workflows, enterprises can increase both transparency and credibility. The result is systems employees actually trust, decisions that can be defended and insights that drive tangible business impact.

“We have to analyze what we want the end decision to be, back it up into the other decisions that support it and then apply the right AI or rules to service that decision supply chain,” Harris explains. “That, to us, really is the agentic AI workflow.”

Why this matters now

AI adoption continues to surge:

  • 65% of organizations already use AI.
  • 32% plan to adopt within the next year.
  • 81% use generative AI, versus 66% using traditional AI.
  • 52% are already experimenting with agentic AI.

Yet adoption alone is not enough. As the report notes: “Trust issues are more than just a philosophical concern; they’re a financial one… nearly half of AI potential is left untapped.”


The next phase of trust

Solving the trust dilemma isn’t about asking AI to do everything or hoping users will blindly follow it. It’s about designing systems that combine accessibility with reliability.

The next wave of AI won’t be about dazzling demos or chatbots. It will be about orchestration: designing systems that know their limits, respect context and delegate tasks intelligently.

Generative AI becomes the interface; trusted systems provide the quantitative rigor. Agentic workflows bridge the two. The result is AI that doesn’t just answer quickly, but answers correctly.

That shift in mindset – from monolith to ecosystem – is critical. AI doesn’t have to be all-knowing; it has to be well-architected.

“For the good of society, businesses and employees: Trust in AI is imperative.” – Data and AI Impact Report

Explore the full Data and AI Impact Report to see how governance, trust and impact intersect worldwide.

About Author

Caslee Sims

I'm Caslee Sims, writer and editor for SAS Blogs. I gravitate toward spaces of creativity, collaboration and community. Whether it's in front of the camera, producing stories, writing them, sharing or retweeting them, I enjoy the art of storytelling. My interests include sports, tech, music and pop culture, among others.
